GSNs: generative stochastic networks

Authors

Abstract


Related resources

GSNs: Generative Stochastic Networks

We introduce a novel training principle for generative probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework generalizes Denoising Auto-Encoders (DAE) and is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution is a conditiona...

Full text
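The training principle described in the abstract above replaces maximum-likelihood fitting of the data distribution with ordinary supervised training of a denoising transition operator. The following is a minimal sketch of that idea, assuming an isotropic Gaussian corruption C(X̃ | X), a one-hidden-layer auto-encoder, and a toy two-component Gaussian mixture as data; the names, layer sizes, and squared-error loss are illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, sigma=0.3):
    """C(x_tilde | x): isotropic Gaussian corruption (an assumed choice)."""
    return x + sigma * rng.standard_normal(x.shape)

class DenoisingAE:
    """One-hidden-layer denoising auto-encoder standing in for the GSN
    transition operator P_theta(X | X_tilde); all sizes are illustrative."""

    def __init__(self, d_in, d_hid, lr=0.01):
        self.W1 = 0.1 * rng.standard_normal((d_in, d_hid))
        self.b1 = np.zeros(d_hid)
        self.W2 = 0.1 * rng.standard_normal((d_hid, d_in))
        self.b2 = np.zeros(d_in)
        self.lr = lr

    def reconstruct(self, x_tilde):
        h = np.tanh(x_tilde @ self.W1 + self.b1)
        return h @ self.W2 + self.b2, h

    def train_step(self, x):
        """Corrupt x, reconstruct it, and take one SGD step on the squared
        reconstruction error -- the operator is trained by plain backprop
        rather than by maximizing the likelihood of the data."""
        x_tilde = corrupt(x)
        x_hat, h = self.reconstruct(x_tilde)
        err = x_hat - x                      # gradient of 0.5*||x_hat - x||^2
        gW2, gb2 = np.outer(h, err), err
        dh = (err @ self.W2.T) * (1.0 - h ** 2)
        gW1, gb1 = np.outer(x_tilde, dh), dh
        for p, g in ((self.W1, gW1), (self.b1, gb1), (self.W2, gW2), (self.b2, gb2)):
            p -= self.lr * g
        return 0.5 * float(err @ err)

# Toy data: a 2-D mixture of two Gaussians standing in for the data distribution.
data = np.concatenate([rng.normal(-2.0, 0.3, (500, 2)),
                       rng.normal(2.0, 0.3, (500, 2))])
dae = DenoisingAE(d_in=2, d_hid=32)
for epoch in range(20):
    for x in rng.permutation(data):
        dae.train_step(x)
```

Training reduces to backprop through the corrupt-and-reconstruct step, which is what makes the transition operator learnable without an explicit likelihood over the data.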

Deep Generative Stochastic Networks Trainable by Backprop

We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally invo...

Full text

Variational Generative Stochastic Networks with Collaborative Shaping

We develop an approach to training generative models based on unrolling a variational autoencoder into a Markov chain, and shaping the chain’s trajectories using a technique inspired by recent work in Approximate Bayesian computation. We show that the global minimizer of the resulting objective is achieved when the generative model reproduces the target distribution. To allow finer control over...

Full text
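The abstract above describes unrolling a variational auto-encoder into a Markov chain. The sketch below illustrates only that unrolling step, assuming encoder and decoder functions that have already been trained (replaced here by fixed random linear maps so the example runs) and Gaussian observation noise; the trajectory-shaping objective from the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

d_x, d_z = 2, 2
# Stand-ins for a *trained* VAE: fixed random linear maps, used purely so the
# example is self-contained; the paper's encoder and decoder are learned networks.
W_enc = 0.5 * rng.standard_normal((d_x, d_z))
W_dec = 0.5 * rng.standard_normal((d_z, d_x))

def encode(x):
    """q(z | x): return the mean and a fixed log-variance."""
    return x @ W_enc, np.full(d_z, -1.0)

def decode(z):
    """p(x | z): deterministic decoder mean."""
    return z @ W_dec

def unrolled_chain(x0, n_steps=10, obs_noise=0.1):
    """Unroll the auto-encoder into a Markov chain: encode the current state,
    sample z with the reparameterization trick, decode, add observation noise,
    and feed the result back in as the next state."""
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        mu, logvar = encode(x)
        z = mu + np.exp(0.5 * logvar) * rng.standard_normal(d_z)
        x = decode(z) + obs_noise * rng.standard_normal(d_x)
        trajectory.append(x)
    return np.stack(trajectory)

traj = unrolled_chain(rng.standard_normal(d_x))
# `traj` is one trajectory of the chain; the paper shapes the distribution of
# such trajectories so the chain's stationary distribution matches the data.
```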

Multimodal Transitions for Generative Stochastic Networks

Generative Stochastic Networks (GSNs) have been recently introduced as an alternative to traditional probabilistic modeling: instead of parametrizing the data distribution directly, one parametrizes a transition operator for a Markov chain whose stationary distribution is an estimator of the data generating distribution. The result of training is therefore a machine that generates samples throu...

Full text

Supplemental Material for: Deep Generative Stochastic Networks Trainable by Backprop

Let $P_{\theta_n}(X \mid \tilde{X})$ be a denoising auto-encoder that has been trained on $n$ training examples. $P_{\theta_n}(X \mid \tilde{X})$ assigns a probability to $X$, given $\tilde{X}$, when $\tilde{X} \sim \mathcal{C}(\tilde{X} \mid X)$. This estimator defines a Markov chain $T_n$ obtained by sampling alternately an $\tilde{X}$ from $\mathcal{C}(\tilde{X} \mid X)$ and an $X$ from $P_{\theta_n}(X \mid \tilde{X})$. Let $\pi_n$ be the asymptotic distribution of the chain defined by $T_n$, if it exists. The following theorem is proven by Bengio et al. ...

Full text
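The supplemental text above defines the chain $T_n$ by alternating a corruption step $\tilde{X} \sim \mathcal{C}(\tilde{X} \mid X)$ and a denoising step $X \sim P_{\theta_n}(X \mid \tilde{X})$. Below is a self-contained sketch of that chain on a 1-D two-component Gaussian mixture, where the exact posterior under Gaussian corruption stands in for the trained estimator $P_{\theta_n}$; the mixture, the corruption level, and the closed-form denoiser are all illustrative assumptions, not details from the supplemental material.

```python
import numpy as np

rng = np.random.default_rng(2)

MEANS, S0, SIGMA = (-2.0, 2.0), 0.3, 0.5   # toy 1-D mixture and corruption level

def corrupt(x):
    """C(X_tilde | X): isotropic Gaussian corruption (assumed form)."""
    return x + SIGMA * rng.standard_normal()

def denoise_sample(x_tilde):
    """Stand-in for sampling X ~ P_theta_n(X | X_tilde).  Instead of a trained
    network, use the exact posterior of the two-component Gaussian mixture
    under Gaussian corruption, so the sketch is self-contained."""
    var = S0 ** 2 + SIGMA ** 2
    logw = np.array([-0.5 * (x_tilde - m) ** 2 / var for m in MEANS])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    k = rng.choice(len(MEANS), p=w)                 # pick a mixture component
    post_var = 1.0 / (1.0 / S0 ** 2 + 1.0 / SIGMA ** 2)
    post_mean = post_var * (MEANS[k] / S0 ** 2 + x_tilde / SIGMA ** 2)
    return post_mean + np.sqrt(post_var) * rng.standard_normal()

def run_chain(x0=0.0, n_steps=5000):
    """The chain T_n: alternate X_tilde ~ C(. | X) and X ~ P_theta_n(. | X_tilde)."""
    x, samples = x0, []
    for _ in range(n_steps):
        x = denoise_sample(corrupt(x))
        samples.append(x)
    return np.array(samples)

samples = run_chain()
# With a consistent denoiser, the empirical distribution of `samples` converges
# to pi_n -- the chain's stationary distribution, which estimates the data
# distribution (here the mixture with means -2 and +2).
```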


Journal

Journal title: Information and Inference

Year: 2016

ISSN: 2049-8764,2049-8772

DOI: 10.1093/imaiai/iaw003